Loop Estimator for Discounted Values in Markov Reward Processes


Abstract

At the working heart of the policy iteration algorithms commonly used and studied in discounted-setting reinforcement learning, the policy evaluation step estimates the value of states with samples from the Markov reward process induced by following a policy in a Markov decision process. We propose a simple and efficient estimator, called the loop estimator, that exploits the regenerative structure of Markov reward processes without explicitly estimating a full model. Our method enjoys a space complexity of O(1) when estimating the value of a single positive recurrent state s, unlike TD with O(S) or model-based methods with O(S^2). Moreover, the regenerative structure enables us to show, without relying on the generative model approach, that the estimator has an instance-dependent convergence rate of O~(\sqrt{\tau_s/T}) over the T steps of a sample path, where \tau_s is the maximal expected hitting time to s. In preliminary numerical experiments, the loop estimator outperforms model-free methods, such as TD(k), and is competitive with the model-based estimator.
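For intuition, the key identity behind this regenerative approach is that V(s) = E[sum_{t<tau} gamma^t R_t] / (1 - E[gamma^tau]), where tau is the return time to s: each excursion ("loop") from s back to s contributes one sample of both expectations. Below is a minimal Python sketch of a plug-in estimator built on this identity. The callables transition(state, rng) and reward_fn(state) are hypothetical stand-ins for the environment, and the sketch illustrates the O(1)-space bookkeeping rather than reproducing the paper's exact algorithm.

```python
import numpy as np

def loop_estimate(transition, reward_fn, s, gamma=0.9, num_steps=100_000, rng=None):
    """Sketch: estimate V(s) from one sample path via regeneration at s.

    Uses the identity V(s) = E[loop reward] / (1 - E[gamma^tau]), where a
    "loop" is an excursion that starts at s and ends at the next visit to s.
    Only three running scalars are kept, hence O(1) space.
    """
    rng = np.random.default_rng() if rng is None else rng
    sum_loop_reward = 0.0    # sum over completed loops of their discounted reward
    sum_loop_discount = 0.0  # sum over completed loops of gamma^(loop length)
    num_loops = 0

    state = s
    loop_reward, discount = 0.0, 1.0  # statistics of the loop in progress
    for _ in range(num_steps):
        loop_reward += discount * reward_fn(state)
        discount *= gamma
        state = transition(state, rng)
        if state == s:  # loop closed: fold it into the running sums
            sum_loop_reward += loop_reward
            sum_loop_discount += discount
            num_loops += 1
            loop_reward, discount = 0.0, 1.0

    if num_loops == 0:
        raise RuntimeError("s was never revisited; increase num_steps")
    return (sum_loop_reward / num_loops) / (1.0 - sum_loop_discount / num_loops)
```

As a usage example, on a two-state chain that leaves state 0 with probability 0.5 and pays reward 1 only in state 0, loop_estimate(lambda x, rng: int(rng.random() < 0.5) if x == 0 else 0, lambda x: float(x == 0), s=0) converges to the discounted value of state 0. Note that an unfinished final loop is simply discarded in this sketch.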


Similar Articles

Accelerated decomposition techniques for large discounted Markov decision processes

Many hierarchical techniques for solving large Markov decision processes (MDPs) are based on partitioning the state space into strongly connected components (SCCs) that can be classified into levels. At each level, smaller problems called restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorith...
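As a rough illustration of the decomposition idea (not this paper's accelerated variant), the SCC levels can be read off the condensation of the transition graph and processed in reverse topological order, so that each restricted MDP depends only on components already solved. A sketch, assuming a hypothetical transitions dict from states to their one-step successor sets:

```python
import networkx as nx

def scc_levels(transitions):
    """Order the SCCs of a state graph so each depends only on earlier-solved ones.

    transitions: dict mapping a state to the set of its one-step successors
    (under any action). Returns the SCCs in an order in which their restricted
    MDPs could be solved, treating downstream components as already-evaluated
    terminal values.
    """
    g = nx.DiGraph((s, t) for s, succs in transitions.items() for t in succs)
    cond = nx.condensation(g)  # DAG whose nodes are the SCCs
    order = reversed(list(nx.topological_sort(cond)))  # downstream SCCs first
    return [cond.nodes[c]["members"] for c in order]
```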


Perceptive Evaluation for the Optimal Discounted Reward in Markov Decision Processes

We formulate a fuzzy perceptive model for Markov decision processes with discounted payoff in which the perception of transition probabilities is described by fuzzy sets. Our aim is to evaluate the optimal expected reward, called a fuzzy perceptive value, based on this perceptive analysis. It is characterized and calculated by a certain fuzzy relation. A machine maintenance problem is ...


A Modified Policy Iteration Algorithm for Discounted Reward Markov Decision Processes

The running time of classical algorithms for Markov decision processes (MDPs) typically grows linearly with the size of the state space, which frequently makes them intractable. This paper presents a Modified Policy Iteration algorithm to compute an optimal policy for large Markov decision processes under the discounted reward criterion and an infinite horizon. The idea of this algorithm is based o...
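For reference, the standard modified policy iteration scheme that such algorithms build on replaces exact policy evaluation with a fixed number m of value backups under the greedy policy. A sketch under assumed tabular inputs (arrays P of shape (A, S, S) and R of shape (A, S)); the paper's specific modification is not reproduced here:

```python
import numpy as np

def modified_policy_iteration(P, R, gamma=0.95, m=10, iters=500, tol=1e-8):
    """Textbook modified policy iteration, as a sketch.

    P[a, s, s'] is the transition probability and R[a, s] the expected reward.
    Instead of solving the policy evaluation equations exactly, each greedy
    policy is evaluated with only m backups before the next improvement step.
    """
    A, S, _ = P.shape
    v = np.zeros(S)
    for _ in range(iters):
        q = R + gamma * P @ v            # (A, S): one-step lookahead values
        policy = q.argmax(axis=0)        # greedy improvement
        v_new = q.max(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return policy, v_new
        v = v_new
        # Partial evaluation: m extra backups under the greedy policy.
        Ppi = P[policy, np.arange(S)]    # (S, S): rows for the chosen actions
        Rpi = R[policy, np.arange(S)]
        for _ in range(m):
            v = Rpi + gamma * Ppi @ v
    return policy, v
```

Setting m = 0 recovers value iteration, while m -> infinity recovers exact policy iteration; intermediate m trades per-iteration cost against the number of improvement steps.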


Discounted Markov decision processes with utility constraints

We consider utility-constrained Markov decision processes, in which the expected utility of the total discounted reward is maximized subject to multiple expected utility constraints. By introducing a corresponding Lagrange function, a saddle-point theorem for the utility-constrained optimization is derived. The existence of a constrained optimal policy is characterized by optimal action sets specified w...
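Concretely, the Lagrangian for such a problem typically takes the following form, where the notation is illustrative rather than the paper's own: B denotes the total discounted reward, U_0 the objective utility, and U_j, c_j the constraint utilities and thresholds.

```latex
\[
  L(\pi, \lambda) \;=\; \mathbb{E}^{\pi}\!\left[U_0(B)\right]
  \;+\; \sum_{j=1}^{k} \lambda_j \Bigl( \mathbb{E}^{\pi}\!\left[U_j(B)\right] - c_j \Bigr),
  \qquad \lambda_j \ge 0 .
\]
```

A saddle point (\pi^*, \lambda^*), i.e. one satisfying L(\pi, \lambda^*) \le L(\pi^*, \lambda^*) \le L(\pi^*, \lambda) for all policies \pi and all \lambda \ge 0, then certifies \pi^* as a constrained optimal policy.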


Simplex Algorithm for Countable-State Discounted Markov Decision Processes

We consider discounted Markov Decision Processes (MDPs) with countably-infinite state spaces, finite action spaces, and unbounded rewards. Typical examples of such MDPs are inventory management and queueing control problems in which there is no specific limit on the size of the inventory or queue. Existing solution methods obtain a sequence of policies that converges to optimality i...
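The finite-state analogue of this line of work is the classical linear-programming formulation of a discounted MDP, which simplex-type methods solve directly. A small Python sketch with SciPy, using the same assumed tabular inputs as above; the paper's contribution is extending simplex-style reasoning to countable state spaces, which this finite sketch does not capture:

```python
import numpy as np
from scipy.optimize import linprog

def solve_discounted_mdp_lp(P, R, gamma=0.9):
    """Classical LP formulation of a finite discounted MDP (illustration only).

    minimize sum_s v(s)  subject to  v(s) >= R[a, s] + gamma * P[a, s] @ v
    for every state s and action a; the optimum is the value function v*.
    """
    A, S, _ = P.shape
    c = np.ones(S)  # minimizing the sum pushes v down onto the constraints
    # Rewrite each constraint as gamma * P[a, s] @ v - v(s) <= -R[a, s]:
    A_ub = np.concatenate([gamma * P[a] - np.eye(S) for a in range(A)])
    b_ub = -R.reshape(A * S)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * S)
    return res.x
```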



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i8.16881